Monitoring of prevalent airborne diseases such as COVID-19 characteristically involves respiratory assessments. While auscultation is a mainstream method for symptom monitoring, its diagnostic utility is hampered by the need for dedicated hospital visits. Continual remote monitoring based on recordings of respiratory sounds on portable devices is a promising alternative that can assist in screening for COVID-19. In this study, we introduce a novel deep learning approach to distinguish COVID-19 patients from healthy controls, given audio recordings of cough or breathing sounds. The proposed approach leverages a novel hierarchical spectrogram transformer (HST) on spectrogram representations of respiratory sounds. HST embodies self-attention mechanisms over local windows in the spectrogram, and the window size is progressively grown over model stages to capture increasingly broad context. HST is compared against state-of-the-art conventional and deep-learning baselines. Comprehensive evaluations on a cross-national dataset indicate that HST outperforms competing methods, achieving over 97% area under the receiver operating characteristic curve (AUC) in detecting COVID-19 cases.
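The core mechanism in the HST abstract above, self-attention restricted to local spectrogram windows whose size grows across stages, can be illustrated with a minimal NumPy sketch. All shapes, window sizes, and the random patch embeddings are hypothetical; the real model uses learned projections and multi-head attention.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def window_self_attention(tokens, window):
    """Self-attention computed independently within non-overlapping windows."""
    n, d = tokens.shape
    assert n % window == 0
    out = np.empty_like(tokens)
    for start in range(0, n, window):
        w = tokens[start:start + window]        # (window, d) slice of tokens
        scores = w @ w.T / np.sqrt(d)           # scaled dot-product scores
        out[start:start + window] = softmax(scores, axis=-1) @ w
    return out

# A spectrogram flattened into a sequence of patch embeddings (toy values).
rng = np.random.default_rng(0)
x = rng.normal(size=(16, 8))

# Stage-wise processing: the attention window doubles at each stage, so later
# stages mix information over progressively larger spectro-temporal context.
for window in (2, 4, 8, 16):
    x = window_self_attention(x, window)
```

The growing window schedule is the point of the sketch: early stages attend only to nearby patches, while the final stage attends globally.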
Automated Machine Learning (AutoML) has been used successfully in settings where the learning task is assumed to be static. In many real-world scenarios, however, the data distribution will evolve over time, and it is yet to be shown whether AutoML techniques can effectively design online pipelines in dynamic environments. This study aims to automate pipeline design for online learning while continuously adapting to data drift. For this purpose, we design an adaptive Online Automated Machine Learning (OAML) system, searching the complete pipeline configuration space of online learners, including preprocessing algorithms and ensembling techniques. This system combines the inherent adaptation capabilities of online learners with the fast automated pipeline (re)optimization capabilities of AutoML. Focusing on optimization techniques that can adapt to evolving objectives, we evaluate asynchronous genetic programming and asynchronous successive halving to optimize these pipelines continually. We experiment on real and artificial data streams with varying types of concept drift to test the performance and adaptation capabilities of the proposed system. The results confirm the utility of OAML over popular online learning algorithms and underscore the benefits of continuous pipeline redesign in the presence of data drift.
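Successive halving, one of the two optimizers the OAML study evaluates, can be sketched in a few lines (the synchronous variant, with an invented toy objective; in OAML the "budget" would correspond to stream instances consumed per pipeline):

```python
import random

def successive_halving(configs, evaluate, min_budget=1, eta=2, rounds=3):
    """Keep the best 1/eta configs each round while multiplying the budget by eta."""
    survivors = list(configs)
    budget = min_budget
    for _ in range(rounds):
        scored = [(evaluate(c, budget), c) for c in survivors]
        scored.sort(reverse=True)               # higher score = better pipeline
        survivors = [c for _, c in scored[:max(1, len(scored) // eta)]]
        budget *= eta
        if len(survivors) == 1:
            break
    return survivors[0]

# Hypothetical pipeline configs reduced to a single hyperparameter; the noisy
# score stands in for prequential accuracy measured on a data stream.
random.seed(0)
configs = [(lr,) for lr in (0.001, 0.01, 0.1, 1.0)]

def evaluate(cfg, budget):
    (lr,) = cfg
    return -abs(lr - 0.1) + random.gauss(0, 0.01 / budget)

best = successive_halving(configs, evaluate)
```

The asynchronous variant used in the paper additionally promotes configurations as soon as their evaluations finish instead of waiting for a full round, which matters under concept drift.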
A long-standing vision in robotics is to equip robots with skills that match the versatility and precision of humans. For example, when playing table tennis, a robot should be able to return the ball in various ways while precisely placing it at a desired location. A common approach to model such versatile behavior is to use a Mixture of Experts (MoE) model, where each expert is a contextual motion primitive. However, learning such MoEs is challenging, as most objectives force the model to cover the entire context space, which prevents specialization of the primitives and results in rather low-quality components. Starting from maximum entropy reinforcement learning (RL), we decompose the objective into optimizing an individual lower bound per mixture component. Further, we introduce a curriculum by allowing the components to focus on local context regions, enabling the model to learn highly accurate skill representations. To this end, we use local context distributions that are adapted jointly with the expert primitives. Our lower bound advocates iteratively adding new components, where each new component focuses on local context regions not covered by the current MoE. This local and incremental learning results in a modular MoE model of high accuracy and versatility, where both properties can be scaled by adding more components on the fly. We demonstrate this through extensive ablations and two challenging simulated robot skill learning tasks. We compare our performance to Live and HiREPS, a known hierarchical policy search method for learning versatile skills.
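The "local context distribution" idea in the abstract above, where each expert is responsible only for the context region its own distribution covers, can be sketched with Gaussian gating over a 1-D context (all numbers and the constant primitives are hypothetical; real experts would be contextual motion primitives):

```python
import numpy as np

def gaussian_pdf(c, mean, std):
    """Density of the expert's local context distribution at context c."""
    return np.exp(-0.5 * ((c - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))

def moe_action(context, experts):
    """Blend expert primitives, weighted by each expert's local context density."""
    weights = np.array([gaussian_pdf(context, e["mean"], e["std"]) for e in experts])
    weights /= weights.sum()
    actions = np.array([e["primitive"](context) for e in experts])
    return float(weights @ actions)

# Two toy experts, each specialized on a local region of a 1-D context space
# (e.g. a desired ball landing position); the context distributions barely overlap.
experts = [
    {"mean": -1.0, "std": 0.5, "primitive": lambda c: -2.0},
    {"mean": +1.0, "std": 0.5, "primitive": lambda c: +2.0},
]
a = moe_action(-1.0, experts)
```

Because the gating densities are local, a context near one expert's mean is handled almost exclusively by that expert, which is what allows each primitive to specialize instead of averaging over the whole context space.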
Finding an optimal individualized treatment regimen is considered one of the most challenging problems in precision medicine. Various patient characteristics influence the response to treatment, and hence there is no one-size-fits-all regimen. Moreover, administering even a single unsafe dose during the course of treatment can have catastrophic consequences for the patient's health. Therefore, a personalized treatment model must ensure patient {\em safety} while {\em efficiently} optimizing the course of therapy. In this work, we study a prevalent and essential medical problem setting where the treatment aims to keep a physiological variable within a range, preferably close to a target level. Such a task is relevant in other domains as well. We propose ESCADA, a generic algorithm for this problem structure, which makes individualized and context-aware optimal dose recommendations while ensuring patient safety. We derive high-probability upper bounds on the regret of ESCADA along with safety guarantees. Finally, we conduct extensive simulations on the {\em bolus insulin dose} allocation problem in type-1 diabetes mellitus and compare ESCADA's performance against Thompson sampling, rule-based dose allocators, and clinicians.
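The safe-dosing problem structure described above, keeping a physiological variable in a range while steering it toward a target, can be sketched as confidence-bound filtering over candidate doses. This is only an illustration of the setting, not the ESCADA algorithm itself; the response model, its uncertainty, and all numbers below are invented and not clinical.

```python
import numpy as np

def safe_dose(candidates, predict, target, safe_range, beta=2.0):
    """Pick the dose whose predicted response is closest to `target`, restricted
    to doses whose confidence interval stays entirely inside `safe_range`."""
    lo, hi = safe_range
    best, best_gap = None, float("inf")
    for d in candidates:
        mean, std = predict(d)
        if mean - beta * std < lo or mean + beta * std > hi:
            continue                      # could leave the safe range: never administer
        gap = abs(mean - target)
        if gap < best_gap:
            best, best_gap = d, gap
    return best

# Toy glucose-response model: a higher insulin dose lowers predicted post-meal
# glucose (mg/dL), and model uncertainty grows with the dose.
def predict(dose):
    mean = 180.0 - 12.0 * dose
    std = 2.0 + 0.5 * dose
    return mean, std

dose = safe_dose(np.arange(0.0, 10.5, 0.5), predict,
                 target=110.0, safe_range=(70.0, 180.0))
```

The key property mirrored here is that optimality is only sought among doses certified safe with high probability, matching the safety-first framing of the abstract.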
Machine learning (ML) models, e.g., deep neural networks (DNNs), are vulnerable to adversarial examples: malicious inputs modified to yield erroneous model outputs, while appearing unmodified to human observers. Potential attacks include having malicious content like malware identified as legitimate or controlling vehicle behavior. Yet, all existing adversarial example attacks require knowledge of either the model internals or its training data. We introduce the first practical demonstration of an attacker controlling a remotely hosted DNN with no such knowledge. Indeed, the only capability of our black-box adversary is to observe labels given by the DNN to chosen inputs. Our attack strategy consists in training a local model to substitute for the target DNN, using inputs synthetically generated by an adversary and labeled by the target DNN. We use the local substitute to craft adversarial examples, and find that they are misclassified by the targeted DNN. To perform a real-world and properly-blinded evaluation, we attack a DNN hosted by MetaMind, an online deep learning API. We find that their DNN misclassifies 84.24% of the adversarial examples crafted with our substitute. We demonstrate the general applicability of our strategy to many ML techniques by conducting the same attack against models hosted by Amazon and Google, using logistic regression substitutes. They yield adversarial examples misclassified by Amazon and Google at rates of 96.19% and 88.94%. We also find that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder.
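The substitute-model strategy above can be condensed into a self-contained toy: a stand-in "oracle" that only returns labels, a local logistic-regression substitute trained on oracle-labeled synthetic queries, and a fast-gradient-style perturbation crafted on the substitute and replayed against the oracle. Everything here (the oracle rule, dimensions, step size) is invented for illustration; the paper attacks remotely hosted DNNs and uses Jacobian-based synthetic data augmentation.

```python
import numpy as np

rng = np.random.default_rng(0)

def oracle(x):
    """Stand-in for the remote black-box model: only its labels are observable."""
    return (x.sum(axis=1) > 0).astype(float)

# Step 1: label synthetic queries with the oracle and fit a local substitute
# (a logistic regression trained by plain gradient descent).
X = rng.normal(size=(200, 5))
y = oracle(X)
w, b = np.zeros(5), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * float((p - y).mean())

# Step 2: craft an adversarial example on the substitute -- a signed-gradient
# step that lowers its logit -- and replay it against the oracle.
x0 = np.full(5, 0.2)                  # the oracle labels this point as class 1
x_adv = x0 - 0.5 * np.sign(w)         # transferable perturbation, epsilon = 0.5
```

The perturbation is computed purely from the substitute's weights, yet it transfers: the oracle's label flips, which is the transferability phenomenon the attack relies on.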
Deep learning takes advantage of large datasets and computationally efficient training algorithms to outperform other approaches at various machine learning tasks. However, imperfections in the training phase of deep neural networks make them vulnerable to adversarial samples: inputs crafted by adversaries with the intent of causing deep neural networks to misclassify. In this work, we formalize the space of adversaries against deep neural networks (DNNs) and introduce a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs. In an application to computer vision, we show that our algorithms can reliably produce samples correctly classified by human subjects but misclassified in specific targets by a DNN with a 97% adversarial success rate while only modifying on average 4.02% of the input features per sample. We then evaluate the vulnerability of different sample classes to adversarial perturbations by defining a hardness measure. Finally, we describe preliminary work outlining defenses against adversarial samples by defining a predictive measure of distance between a benign input and a target classification.
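The few-features-modified attack described above can be sketched as a greedy, saliency-guided loop that perturbs one input feature at a time until the prediction flips. A hypothetical linear classifier stands in for the DNN here; in the paper the saliency map is built from the DNN's forward-derivative Jacobian, and all numbers below are illustrative.

```python
import numpy as np

def saliency_attack(x, w, b, target, eps=0.3, max_changes=5):
    """Greedily change one feature at a time, picking the feature whose
    gradient moves the logit most toward the target class (0 or 1)."""
    x = x.copy()
    changed = []
    direction = 1.0 if target == 1 else -1.0
    for _ in range(max_changes):
        logit = x @ w + b
        if (logit > 0) == (target == 1):
            break                           # misclassified as the target: done
        saliency = direction * w            # d(logit)/dx, signed toward target
        saliency[changed] = 0.0             # modify each feature at most once
        j = int(np.argmax(np.abs(saliency)))
        x[j] += eps * np.sign(saliency[j])
        changed.append(j)
    return x, changed

# A toy linear classifier and a benign input it assigns to class 1.
w = np.array([0.5, -2.0, 1.0, 0.1])
b = 0.0
x0 = np.array([0.4, -0.3, 0.2, 0.1])
x_adv, changed = saliency_attack(x0, w, b, target=0)
```

Only a few of the input features end up modified, mirroring the paper's observation that targeted misclassification is achievable while altering a small fraction of the input.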